GitHub Repository: debakarr/machinelearning
Path: blob/master/Part 3 - Classification/K Nearest Neighbors/[R] K-Nearest Neighbour.ipynb
Kernel: R

K-Nearest Neighbour

Data preprocessing

# Import the dataset
dataset = read.csv('Social_Network_Ads.csv')
dataset = dataset[, 3:5]
head(dataset, 10)
# Splitting the dataset into the Training set and Test set
library(caTools)
set.seed(42)
split = sample.split(dataset$Purchased, SplitRatio = 0.8)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)
head(training_set, 10)
head(test_set, 10)
# Feature Scaling
training_set[, 1:2] = scale(training_set[, 1:2])
test_set[, 1:2] = scale(test_set[, 1:2])
head(training_set, 10)
head(test_set, 10)
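Note that the cell above standardizes the training and test sets independently. A common variant, sketched below under the assumption that the test set should live in the training set's coordinate system, is to reuse the training set's means and standard deviations when scaling the test set:

# Variant (not what the cell above does): scale the test set with the
# training set's centering and scaling parameters
train_scaled = scale(training_set[, 1:2])
test_scaled = scale(test_set[, 1:2],
                    center = attr(train_scaled, "scaled:center"),
                    scale = attr(train_scaled, "scaled:scale"))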

Fitting K-NN to the Training set and Predicting the test set result

library(class)
y_pred = knn(train = training_set[, -3],
             test = test_set[, -3],
             cl = training_set[, 3],
             k = 5)
head(y_pred, 10)
head(test_set[, 3], 10)

The predictions are almost all correct.
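Rather than eyeballing the first ten rows, the match can be quantified over the whole test set. A minimal check, using only the objects already defined above:

# Fraction of test-set predictions that match the true labels
mean(y_pred == test_set[, 3])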


Making the Confusion Matrix

cm = table(test_set[, 3], y_pred)
cm
   y_pred
     0  1
  0 47  4
  1  5 24

That's awesome. Only 5 + 4 = 9 incorrect predictions against 47 + 24 = 71 correct predictions out of the 80 test observations.
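The same figure drops straight out of the confusion matrix: the diagonal holds the correct predictions, so a one-line accuracy computation is

# Overall accuracy: correct predictions (diagonal) over all test observations
sum(diag(cm)) / sum(cm)   # 71 / 80 = 0.8875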


Visualizing the Training set results

# install.packages('ElemStatLearn')
library(ElemStatLearn)

set = training_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = knn(train = training_set[, -3],
             test = grid_set,
             cl = training_set[, 3],
             k = 5)
plot(set[, -3],
     main = 'K-Nearest Neighbour Classifier (Training set)',
     xlab = 'Age', ylab = 'Estimated Salary',
     xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'), col = 'white')
legend("topright", legend = c("0", "1"), pch = 16, col = c('red3', 'green4'))
[Output: decision-boundary plot, 'K-Nearest Neighbour Classifier (Training set)']

Visualizing the Test set results

set = test_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = knn(train = training_set[, -3],
             test = grid_set,
             cl = training_set[, 3],
             k = 5)
plot(set[, -3],
     main = 'K-Nearest Neighbour Classifier (Test set)',
     xlab = 'Age', ylab = 'Estimated Salary',
     xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'), col = 'white')
legend("topright", legend = c("0", "1"), pch = 16, col = c('red3', 'green4'))
[Output: decision-boundary plot, 'K-Nearest Neighbour Classifier (Test set)']

Gist: K-NN is a non-linear classifier. That is why it predicts so well on this problem: the boundary between buyers and non-buyers in the plots above is clearly curved, and K-NN can follow it where a linear classifier could not.
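Since k controls how smooth that non-linear boundary is, one quick experiment is to re-fit the model for a few values of k and compare test-set accuracy. A rough sketch reusing the objects defined above (the particular k values are arbitrary):

# Test-set accuracy for a handful of neighbourhood sizes
for (k in c(1, 5, 15, 51)) {
  pred = knn(train = training_set[, -3],
             test = test_set[, -3],
             cl = training_set[, 3],
             k = k)
  cat("k =", k, " accuracy =", mean(pred == test_set[, 3]), "\n")
}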